Purpose

This script takes a deep dive into the Landsat 8 labels for a more rigorous analysis of inconsistent band data and outliers in the filtered label dataset. Here we determine whether any more label data points should be removed from the training dataset and whether we can glean anything from the metadata in the outlier dataset that would let us pre-emptively toss out scenes when we apply the classification algorithm.

harmonize_version = "v2024-04-17"
outlier_version = "v2024-04-17"

LS8 <- read_rds(paste0("data/labels/harmonized_LS89_labels_", harmonize_version, ".RDS")) %>% 
  filter(mission == "LANDSAT_8")

Check for mis-matched band data between user data and re-pull

Just look at the data to see whether the user-pulled data and our re-pull are consistent; here, the user data are in “BX” format and the re-pull is in “SR_BX” format. These steps assure data quality in case the volunteer didn’t follow the directions explicitly.

pmap(.l = list(user_band = LS89_user,
               ee_band = LS89_ee,
               data = list(LS8),
               mission = list("LANDSAT_8")),
     .f = make_band_comp_plot)
(Six band-comparison plots, one per band, comparing the user-pulled values to the re-pulled values.)

There isn’t a ton of mis-match here; we’ll just use B7/SR_B7 as a reference to filter out inconsistent labels.

LS8_inconsistent <- LS8 %>% 
  filter((is.na(SR_B7) | SR_B7 != B7))

LS8_inconsistent %>% 
  group_by(class) %>% 
  summarise(n_labels = n()) %>% 
  kable()
| class                  | n_labels |
|:-----------------------|---------:|
| cloud                  |        5 |
| lightNearShoreSediment |        1 |
| offShoreSediment       |        3 |
| shorelineContamination |        1 |

Most of these are cloud labels, where the pixel is saturated and therefore masked in the re-pull (resulting in an NA). Let’s drop those from this subset and then look further.

LS8_inconsistent <- LS8_inconsistent %>% 
  filter(!(class == "cloud" & is.na(SR_B7)))

This leaves 0.9% of the Landsat 8 labels as inconsistent. Let’s do a quick sanity check to make sure that we’ve dropped the values that are inconsistent between pulls:

LS8_filtered <- LS8 %>% 
  filter(# filter data where SR_B7 has data and where the values match between the two
         # pulls.
         (!is.na(SR_B7) & SR_B7 == B7) | 
           # or where the user-specified class is cloud and the pixel was saturated,
           # providing no surface reflectance data
           (class == "cloud" & is.na(SR_B7)),
         # and where every re-pulled band value is <= 1, since values greater
         # than 1 aren't valid surface reflectance
         if_all(LS89_ee,
                ~ . <= 1))

And plot:

(Six band-comparison plots for the filtered labels, one per band.)

Looks good!

And now let’s look at the data by class:

(Six plots of the filtered band data by class, one per band.)

We aren’t actually modeling “other” (not sufficient observations to classify) or “shorelineContamination” (we’ll use this later to block areas where there is likely shoreline contamination in the AOI). Additionally, the “algalBloom” labels don’t have sufficient n (nor do we think these are necessarily visible), so let’s drop those categories and look at the data again.

LS8_for_class_analysis <- LS8_filtered %>% 
  filter(!(class %in% c("other", "shorelineContamination", "algalBloom")))
(Six plots of the band data by class after dropping those categories, one per band.)

Interesting - the classes look really similar in distribution (maybe because the cloud values are so high that they compress the other classes). We’ll have to see whether there are statistical differences.

Check for systemic volunteer inconsistencies

Let’s also go back and check to see if there is any pattern to the inconsistent labels.
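
The tally below comes from a chunk that isn’t echoed in the output; as a minimal sketch (assuming the inconsistent-label data still carry vol_init and date columns, and that this is the object that was summarized - both assumptions), it could be built like this:

# Sketch: count inconsistent labels per volunteer and the number of distinct
# scene dates they fall on (object and column names assumed from context)
LS8_inconsistent %>% 
  group_by(vol_init) %>% 
  summarise(n_tot_labs = n(),
            n_dates = n_distinct(date)) %>% 
  kable()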

| vol_init | n_tot_labs | n_dates |
|:---------|-----------:|--------:|
| HAD      |          2 |       2 |
| LRCP     |          3 |       2 |
| MRB      |          3 |       2 |
| SKS      |          2 |       1 |

I’m not concerned about any systemic errors here that would require modified data handling for a specific scene or contributor; the labels are spread amongst volunteers and scenes.

Outlier handling

There are statistical outliers within this dataset and they may impact the interpretation of any statistical testing we do. Let’s see if we can narrow down when those outliers occur and/or glean anything from the outlier data that may be applicable to the application of the algorithm. Outliers may be a systemic issue (as in, the scene is an outlier), a user issue (a user may have been a bad actor), or they may just be real. This section asks those questions. The “true outliers” that we dismiss from the dataset will also be used to aid in interpretation/application of the algorithm across the Landsat stack, so it is important to note any patterns we see in the outlier dataset.
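
The outlier flagging itself happens in a chunk that isn’t echoed; below is a rough sketch of one way the 1.5*IQR rule could be applied per band to the non-cloud labels (the grouping and exact construction used upstream are assumptions):

# Sketch: flag labels where any re-pulled band falls outside 1.5*IQR of that
# band's distribution across the non-cloud labels (grouping is an assumption)
flag_iqr_outlier <- function(x) {
  q <- quantile(x, probs = c(0.25, 0.75), na.rm = TRUE)
  iqr <- q[2] - q[1]
  x < q[1] - 1.5 * iqr | x > q[2] + 1.5 * iqr
}

outliers <- LS8_for_class_analysis %>% 
  filter(class != "cloud") %>% 
  mutate(across(all_of(LS89_ee), flag_iqr_outlier, .names = "out_{.col}")) %>% 
  filter(if_any(starts_with("out_")))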

## [1] "Classes represented in outliers:"
## [1] "darkNearShoreSediment" "offShoreSediment"      "openWater"

Okay, 21 outliers (>1.5*IQR) out of 989 - they are all from non-cloud groups, and none of them are light near shore sediment.

How many of these outliers are in specific scenes?

LS8_out_date <- outliers %>% 
  group_by(date, vol_init) %>% 
  summarize(n_out = n())
LS8_date <- LS8_for_class_analysis %>% 
  filter(class != "cloud") %>% 
  group_by(date, vol_init) %>% 
  summarise(n_tot = n())
LS8_out_date <- left_join(LS8_out_date, LS8_date) %>% 
  mutate(percent_outlier = n_out/n_tot*100) %>% 
  arrange(-percent_outlier)
LS8_out_date %>% 
  kable()
| date       | vol_init | n_out | n_tot | percent_outlier |
|:-----------|:---------|------:|------:|----------------:|
| 2022-11-21 | HAD      |    10 |    12 |      83.3333333 |
| 2020-04-05 | AMP      |     2 |     9 |      22.2222222 |
| 2022-08-01 | MRB      |     5 |    31 |      16.1290323 |
| 2014-06-08 | LRCP     |     2 |   193 |       1.0362694 |
| 2020-07-10 | MRB      |     1 |    98 |       1.0204082 |
| 2020-08-11 | HAD      |     1 |   135 |       0.7407407 |

There are three scenes here with a very high proportion of outliers - perhaps there is something about the atmospheric correction in these particular scenes, or the general scene quality?

LS8_out_date %>% 
  filter(percent_outlier > 20) %>% 
  select(date, vol_init) %>% 
  left_join(., LS8) %>% 
  select(date, vol_init, DATA_SOURCE_AIR_TEMPERATURE:max_cloud_cover) %>% 
  distinct() %>% 
  kable()
| date | vol_init | DATA_SOURCE_AIR_TEMPERATURE | DATA_SOURCE_ELEVATION | DATA_SOURCE_OZONE | DATA_SOURCE_PRESSURE | DATA_SOURCE_REANALYSIS | DATA_SOURCE_TIRS_STRAY_LIGHT_CORRECTION | DATA_SOURCE_WATER_VAPOR | NADIR_OFFNADIR | CLOUD_COVER_list | IMAGE_QUALITY_OLI_list | IMAGE_QUALITY_TIRS_list | mean_cloud_cover | max_cloud_cover |
|:-----------|:----|:------|:--------|:------|:-----------|:-------------|:-----|:------|:------|:-------------|:--|:--|-------:|------:|
| 2022-11-21 | HAD | MODIS | GLS2000 | MODIS | Calculated | GEOS-5 FP-IT | TIRS | MODIS | NADIR | 44.68        | 9 | 9 | 44.680 | 44.68 |
| 2020-04-05 | AMP | MODIS | GLS2000 | MODIS | Calculated | GEOS-5 FP-IT | TIRS | MODIS | NADIR | 26.31; 20.62 | 9 | 9 | 23.465 | 26.31 |

Image quality is high across the board, but the 2022-11-21 image has moderate cloud cover, and is almost entirely outliers. Let’s look at that scene:

Another case of high cloud cover and adjacent snow! We should definitely toss this scene. For consistency, let’s look at instances where outliers are in at least three bands for a given label:
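
The table below also comes from an unechoed chunk; building on the out_* flags from the sketch above, the per-label tallies could look roughly like this (column names follow the table, but the construction is an assumption):

# Sketch: for each outlier label, count how many bands were flagged and list them
outlier_bands <- outliers %>% 
  pivot_longer(starts_with("out_"),
               names_to = "band", names_prefix = "out_",
               values_to = "is_outlier") %>% 
  filter(is_outlier) %>% 
  group_by(date, class, vol_init, user_label_id) %>% 
  summarise(n_bands_out = n(),
            bands_out = paste(band, collapse = "; "),
            .groups = "drop") %>% 
  filter(n_bands_out >= 3)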

| date       | class     | vol_init | user_label_id | n_bands_out | bands_out                  |
|:-----------|:----------|:---------|--------------:|------------:|:---------------------------|
| 2022-08-01 | openWater | MRB      |          1030 |           4 | SR_B2; SR_B3; SR_B4; SR_B5 |
| 2022-08-01 | openWater | MRB      |          1031 |           4 | SR_B2; SR_B3; SR_B4; SR_B5 |
| 2022-08-01 | openWater | MRB      |          1032 |           4 | SR_B2; SR_B3; SR_B4; SR_B5 |
| 2022-08-01 | openWater | MRB      |          1033 |           4 | SR_B2; SR_B3; SR_B4; SR_B5 |
| 2022-08-01 | openWater | MRB      |          1034 |           4 | SR_B2; SR_B3; SR_B4; SR_B5 |

Let’s group by image date and volunteer and tally up the number of labels where at least 3 bands were outliers:

| date       | vol_init | n_labels |
|:-----------|:---------|---------:|
| 2022-08-01 | MRB      |        5 |

Interesting - let’s look at this scene too.

This scene has the weird green cloud situation happening and the south-extending NA data on the west side of the AOI. Let’s look at image quality here:

LS8_for_class_analysis %>% 
  filter(date == "2022-08-01") %>% 
  pluck("IMAGE_QUALITY_OLI_list") %>% 
  unique() %>% 
  unlist()
## [1] "9"

That’s not helpful - the image quality is the highest it can be.

QA Pixels

Do any of the labels have QA pixel indications of cloud or cloud shadow? The first pass here is for all data that don’t have a label of “cloud” (not just outliers). Let’s see if the low-certainty classification in the QA band is useful here (there is no medium certainty for LS8/9).

LS8_for_class_analysis %>% 
  mutate(QA = case_when(str_sub(QA_PIXEL_binary, 1, 2) %in% c("01", "11")  ~ "cirrus",
                   str_sub(QA_PIXEL_binary, 3, 4) %in% c("01", "11")  ~ "snow/ice",
                   str_sub(QA_PIXEL_binary, 5, 6) %in% c("01", "11")  ~ "cloud shadow",
                   str_sub(QA_PIXEL_binary, 7, 8) %in% c("01", "11")  ~ "cloud",
                   TRUE ~ "clear")) %>% 
  group_by(QA) %>% 
  filter(class != "cloud") %>% 
  summarize(n_tot = n()) %>% 
  kable()
| QA           | n_tot |
|:-------------|------:|
| clear        |   569 |
| cloud shadow |    20 |

The low-confidence classification behaves much better for LS8 than the medium-confidence classification did for LS5/7 - let’s check that the classes are the same when we only flag high confidence:

LS8_for_class_analysis %>% 
  mutate(QA = case_when(str_sub(QA_PIXEL_binary, 1, 2) == 11 ~ "cirrus",
                   str_sub(QA_PIXEL_binary, 3, 4) == 11 ~ "snow/ice",
                   str_sub(QA_PIXEL_binary, 5, 6)  == 11 ~ "cloud shadow",
                   str_sub(QA_PIXEL_binary, 7, 8)  == 11 ~ "cloud",
                   TRUE ~ "clear")) %>% 
  group_by(QA) %>% 
  filter(class != "cloud") %>% 
  summarize(n_tot = n()) %>% 
  kable()
| QA           | n_tot |
|:-------------|------:|
| clear        |   569 |
| cloud shadow |    20 |

They are the same! Let’s look at the cloud shadow group to see if there is anything egregious:

LS8_for_class_analysis %>% 
  filter(str_sub(QA_PIXEL_binary, 5, 6) == 11) %>% 
  group_by(date, vol_init) %>% 
  summarise(n_cloud_shadow = n()) %>% 
  arrange(-n_cloud_shadow) %>% 
  kable()
| date       | vol_init | n_cloud_shadow |
|:-----------|:---------|---------------:|
| 2022-11-21 | HAD      |             28 |
| 2016-09-01 | AMP      |              3 |
| 2020-04-05 | AMP      |              3 |
| 2017-09-04 | FYC      |              2 |
| 2020-07-10 | MRB      |              1 |
| 2022-08-01 | MRB      |              1 |

We already know that the highest ranked cloud shadow scene here is also one we are going to drop, so I don’t think there is anything else to pursue here.

Clouds

How many of these outliers have near-pixel clouds (as measured by ST_CDIST)?
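
The percentages below are computed inline; here is a small sketch of the comparison, assuming ST_CDIST has already been rescaled to metres during the re-pull (an assumption about the units as stored):

# Sketch: labels within 500 m of a detected cloud, in the outlier set and in
# the full filtered dataset (ST_CDIST assumed to be in metres)
count_near_cloud <- function(df) {
  df %>% 
    summarise(n_near = sum(ST_CDIST < 500, na.rm = TRUE),
              pct_near = round(100 * n_near / n(), 1))
}
bind_rows(outliers = count_near_cloud(outliers),
          all_labels = count_near_cloud(LS8_for_class_analysis),
          .id = "dataset")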

There is 1 label (4.8% of outliers) that isn’t “cloud” in the outlier dataset with a cloud distance <500 m, and there are 32 labels (3.2%) in the whole dataset with a cloud distance <500 m. Since these proportions are about the same (or at least not severely disproportionate), I don’t think this is terribly helpful.

How many of the outliers have high cloud cover, as reported by the scene-level metadata? Note, we don’t have the direct scene cloud cover associated with individual labels, rather a list of the scene level cloud cover values associated with the AOI.

The outlier dataset contains 0 (0%) where the max cloud cover was > 75% and 0 (0%) where the mean cloud cover was > 50%. The filtered dataset contains 0 (0%) where max was >75% and 0 (0%) where the mean cloud cover was > 50%. Welp, this is unhelpful!

RadSat QA bit

Pixels can also be saturated in one or more bands, so we need to make sure that QA_RADSAT is set to zero for all labels (including clouds). During the re-pull we masked saturated pixels, so this should already be zero.

LS8_for_class_analysis %>% 
  mutate(radsat = if_else(QA_RADSAT == 0,
                           "n",
                           "y")) %>% 
  group_by(radsat) %>% 
  summarize(n_tot = n()) %>% 
  kable()
| radsat | n_tot |
|:-------|------:|
| n      |   989 |

Great! No bands are saturated!

Aerosol QA bit

Landsat 8 and 9 feature an Aerosol QA band, derived from Band 1. We should look through the data here to see if any of the labels are in high aerosol QA pixels, which the USGS suggests should not be used.

LS8_for_class_analysis %>% 
  mutate(aerosol = if_else(str_sub(SR_QA_AEROSOL_binary, 1, 2) == 11,
                           "y",
                           "n")) %>% 
  group_by(aerosol) %>% 
  filter(class != "cloud") %>% 
  summarize(n_tot = n()) %>% 
  kable()
| aerosol | n_tot |
|:--------|------:|
| n       |   506 |
| y       |    83 |

And let’s look to see when the instances of high aerosol are:

LS8_for_class_analysis %>% 
  mutate(aerosol = if_else(str_sub(SR_QA_AEROSOL_binary, 1, 2) == 11,
                           "y",
                           "n")) %>%
  filter(aerosol == "y") %>% 
  group_by(date) %>% 
  filter(class != "cloud") %>% 
  summarize(n_tot = n()) %>% 
  arrange(-n_tot) %>% 
  kable()
| date       | n_tot |
|:-----------|------:|
| 2020-08-11 |    32 |
| 2017-09-04 |    23 |
| 2022-08-01 |    14 |
| 2014-06-08 |     5 |
| 2022-11-21 |     5 |
| 2016-09-01 |     3 |
| 2020-07-10 |     1 |

Let’s look at the 2020-08-11 and 2017-09-04 images. First 2020-08-11:

This image is clear as day, but if you zoom in near the Apostle Islands, you can see the haze. As suggested by the USGS, I think it is fine to just toss labels where aerosol is high.

And 2017-09-04:

Woah! I understand now why there might be some algae bloom labels in this dataset. This is very hazy - I’m also interested in the scene quality here:

LS8_for_class_analysis %>% 
  filter(date == "2017-09-04") %>% 
  pluck("IMAGE_QUALITY_OLI_list") %>% 
  unique() %>% 
  unlist()
## [1] "9"

Well that’s surprising. I guess this is truly an instance where we’re going to have to trust the LS8 Aerosol bit and mask out all high aerosol pixels and toss all labels that are flagged with high aerosol. This scene has 23 of 40 labels that are flagged as high aerosol, but I argue anything in this scene should be tossed.

Training dataset implications

For the purposes of training data, we are going to throw out the outlier scene from 2022-11-21 and the high-aerosol scene from 2017-09-04, and drop any remaining labels flagged as high aerosol (unless the class is cloud):

LS8_training_labels <- LS8_for_class_analysis %>% 
  # drop the scene with outliers and high aerosol
  filter(!(date %in% c("2022-11-21", "2017-09-04")),
         # drop all the labels with high aerosol, unless the class is cloud
         (str_sub(SR_QA_AEROSOL_binary, 1, 2) != 11 | class == "cloud"))

Testing for inter-class differences

We do want to have an idea of how different the classes are, in regards to band data. While there are a bunch of interactions that we could get into here, for the sake of this analysis, we are going to analyze the class differences by band.

Kruskal-Wallis assumptions:

  1. Data are non-Normal or have a skewed distribution
  2. There must be at least two independent groups.
  3. Data have a similar distribution across groups.
  4. Data are independent; the groups shouldn’t have a relationship to one another
  5. Each group should contain at least 5 observations

ANOVA assumptions:

  1. data are distributed normally
  2. data are independent
  3. variance across groups is similar

We can’t entirely assert sample independence, and we know that the variance and distribution are different for “cloud” labels, but those data are also visibly different from the other classes.

In order to systematically test for differences between classes and be able to interpret the data, we will need to know some things about our data:

  1. Are the data normally distributed (Shapiro-Wilk)?
  2. Are there outliers that may impact interpretation?
  3. If the data are non-normal, perform a Kruskal-Wallis test; otherwise an ANOVA
  4. If the null is rejected (and there is a difference in at least one class), perform a post-hoc test for pairwise comparison (Dunn test in either case) - see the sketch below
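
The chunk that runs these tests isn’t echoed; here is a minimal sketch of how the per-band workflow could look with base R and rstatix, assuming LS89_ee is a character vector of the re-pulled band names and that a Bonferroni adjustment was used for the pairwise comparisons (both assumptions):

library(rstatix)

band_tests <- map(LS89_ee, function(band) {
  dat <- LS8_training_labels %>% 
    select(class, value = all_of(band))
  # 1-2. Shapiro-Wilk normality check per class (outliers were handled above)
  normality <- dat %>% 
    group_by(class) %>% 
    summarise(shapiro_p = shapiro.test(value)$p.value)
  # 3. Kruskal-Wallis across classes (the band data are non-normal)
  kw <- kruskal.test(value ~ class, data = dat)
  # 4. Dunn post-hoc pairwise comparisons if the null is rejected
  pairwise <- NULL
  if (kw$p.value < 0.05) {
    pairwise <- dat %>% 
      dunn_test(value ~ class, p.adjust.method = "bonferroni") %>% 
      mutate(band = band, .before = 1)
  }
  list(normality = normality, kruskal = kw, pairwise = pairwise)
}) %>% 
  set_names(LS89_ee)

# Pull out the pairwise comparisons that were not significant
map(band_tests, "pairwise") %>% 
  bind_rows() %>% 
  filter(p.adj.signif == "ns")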

With this workflow, most classes are statistically different - below are the cases where the pairwise comparisons were not deemed statistically significant:

## # A tibble: 18 × 9
##    band  group1         group2    n1    n2 statistic       p  p.adj p.adj.signif
##    <chr> <chr>          <chr>  <int> <int>     <dbl>   <dbl>  <dbl> <chr>       
##  1 SR_B2 darkNearShore… light…   110   163    1.74   0.0823  0.823  ns          
##  2 SR_B2 darkNearShore… offSh…   110   154   -2.75   0.00595 0.0595 ns          
##  3 SR_B3 darkNearShore… light…   110   163    2.11   0.0345  0.345  ns          
##  4 SR_B4 darkNearShore… light…   110   163    0.0513 0.959   1      ns          
##  5 SR_B5 darkNearShore… light…   110   163   -1.65   0.0997  0.997  ns          
##  6 SR_B5 offShoreSedim… openW…   154    70   -0.0105 0.992   1      ns          
##  7 SR_B6 darkNearShore… light…   110   163   -0.443  0.658   1      ns          
##  8 SR_B6 darkNearShore… offSh…   110   154   -2.62   0.00881 0.0881 ns          
##  9 SR_B6 darkNearShore… openW…   110    70   -2.41   0.0160  0.160  ns          
## 10 SR_B6 lightNearShor… offSh…   163   154   -2.42   0.0154  0.154  ns          
## 11 SR_B6 lightNearShor… openW…   163    70   -2.20   0.0281  0.281  ns          
## 12 SR_B6 offShoreSedim… openW…   154    70   -0.288  0.774   1      ns          
## 13 SR_B7 darkNearShore… light…   110   163    0.103  0.918   1      ns          
## 14 SR_B7 darkNearShore… offSh…   110   154   -1.69   0.0907  0.907  ns          
## 15 SR_B7 darkNearShore… openW…   110    70   -1.98   0.0481  0.481  ns          
## 16 SR_B7 lightNearShor… offSh…   163   154   -1.99   0.0463  0.463  ns          
## 17 SR_B7 lightNearShor… openW…   163    70   -2.20   0.0276  0.276  ns          
## 18 SR_B7 offShoreSedim… openW…   154    70   -0.631  0.528   1      ns

Alright, all over the map here - dark near shore is still a problem, but also some issues with offshore sediment, open water, and light near shore sediment. This could be problematic. We’ll have to see how these data look and hope that ML can pick up on the subtle differences.

DNSS: dark near shore sediment, LNSS: light near shore sediment, OSS: offshore sediment

DNSS: dark near shore sediment, LNSS: light near shore sediment, OSS: offshore sediment

There are definitely some varying patterns here, let’s zoom in on the sediment classes.

DNSS: dark near shore sediment, LNSS: light near shore sediment, OSS: offshore sediment

DNSS: dark near shore sediment, LNSS: light near shore sediment, OSS: offshore sediment

Hmm, this is a true scatter shot. It will be interesting to see what happens in development and application.

Aggregating sediment classes and performing statistical tests

As a backup, we should consider using aggregated sediment classes, where any sediment label is treated as a general class of “sediment”. Let’s run the same process to test for class differences.
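
The aggregation happens in an unechoed chunk; a minimal sketch of how the agg_class column used below could be derived (the exact recoding is an assumption), after which the same Kruskal-Wallis/Dunn workflow sketched above can be re-run with agg_class in place of class:

# Sketch: collapse the three sediment classes into a single "sediment" class
LS8_training_labels <- LS8_training_labels %>% 
  mutate(agg_class = if_else(str_detect(class, "Sediment"),
                             "sediment",
                             class))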

## # A tibble: 2 × 9
##   band  group1    group2      n1    n2 statistic      p p.adj p.adj.signif
##   <chr> <chr>     <chr>    <int> <int>     <dbl>  <dbl> <dbl> <chr>       
## 1 SR_B6 openWater sediment    70   427      1.78 0.0749 0.225 ns          
## 2 SR_B7 openWater sediment    70   427      1.79 0.0734 0.220 ns

And let’s look at the scatter plots here:

And if we drop the cloud:

LS8_training_labels %>% 
  filter(agg_class != "cloud") %>% 
  ggpairs(columns = LS89_ee, aes(color = agg_class)) + 
  scale_color_colorblind() +
  scale_fill_colorblind() +
  theme_few()

These seem pretty recognizable in visible band space, but maybe not otherwise.

Export the training labels

Things to note for Landsat 8:

  • classes are less statistically differentiable within bands than for LS5 and 7 - we need to be thoughtful about this when applying the models. This may require more post-hoc testing to add certainty to the output.
  • bright cloud cover and snow may impact Rrs within the waterbody, leading to outliers. We will need to be cautious applying the algorithm when snow is on the ground!
  • we must mask high-aerosol pixels, as they will get labeled as something else entirely because high aerosol results in green-hued areas of scenes.
  • we may need to aggregate the sediment classes into a single class for reliable results

write_rds(LS8_training_labels, paste0("data/labels/LS8_labels_for_tvt_", outlier_version, ".RDS"))